252 research outputs found

    Non-visual information display using tactons

    This paper describes a novel form of display using tactile output. Tactons, or tactile icons, are structured tactile messages that can be used to communicate messages to users non-visually. A range of different parameters can be used to construct Tactons, e.g. frequency, amplitude, waveform and duration of a tactile pulse, plus body location. Tactons have the potential to improve interaction in a range of different areas, particularly where the visual display is overloaded, limited in size or not available, such as interfaces for blind people or on mobile and wearable devices.
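    The parameters listed above suggest a natural record type. The following is a minimal illustrative sketch, not from the paper itself; all field names and example values are assumptions:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Tacton:
        """One structured tactile message; field names are illustrative."""
        frequency_hz: float   # vibration frequency of the pulse
        amplitude: float      # relative intensity, 0.0-1.0
        waveform: str         # pulse shape, e.g. "sine" or "square"
        duration_ms: int      # pulse length in milliseconds
        body_location: str    # where the actuator sits, e.g. "wrist"

    # A hypothetical "new message" Tacton for a wrist-worn device
    alert = Tacton(frequency_hz=250.0, amplitude=0.8, waveform="sine",
                   duration_ms=120, body_location="wrist")
    ```

    Varying any one field while holding the others fixed is one way such a design space could be explored.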

    The design and evaluation of a sonically enhanced tool palette

    This paper describes an experiment to investigate the effectiveness of adding sound to tool palettes. Palettes have usability problems because users need to see the information they present, but they are often outside the area of visual focus. We used nonspeech sounds called earcons to indicate the current tool and when tool changes occurred, so that users could tell which tool was active wherever they were looking. Results showed a significant reduction in the number of tasks performed with the wrong tool: users knew what the current tool was and did not try to perform tasks with the wrong one. This was achieved without making the tool palettes any more annoying to use.

    Multi-Moji: Combining Thermal, Vibrotactile and Visual Stimuli to Expand the Affective Range of Feedback

    This paper explores the combination of multiple concurrent modalities for conveying emotional information in HCI: temperature, vibration and abstract visual displays. Each modality has been studied individually, but can only convey a limited range of emotions within two-dimensional valence-arousal space. This paper is the first to systematically combine multiple modalities to expand the available affective range. Three studies were conducted: Study 1 measured the emotionality of vibrotactile feedback by itself; Study 2 measured the perceived emotional content of three bimodal combinations: vibrotactile + thermal, vibrotactile + visual and visual + thermal; Study 3 then combined all three modalities. Results show that combining modalities increases the available range of emotional states, particularly in the problematic top-right and bottom-left quadrants of the dimensional model. We also provide a novel lookup resource for designers to identify stimuli to convey a range of emotions.

    Touching the invisible: Localizing ultrasonic haptic cues

    While mid-air gestures offer new possibilities to interact with or around devices, some situations, such as interacting with applications, playing games or navigating, may require visual attention to be focused on a main task. Ultrasonic haptic feedback can provide 3D spatial haptic cues that do not demand visual attention for these contexts. In this paper, we present an initial study of active exploration of ultrasonic haptic virtual points that investigates spatial localization with and without the use of the visual modality. Our results show that, when providing haptic feedback giving the location of a widget, users perform 50% more accurately compared to providing visual feedback alone. When provided with a haptic location of a widget alone, users are more than 30% more accurate than when given a visual location. When aware of the location of the haptic feedback, active exploration decreased the minimum recommended widget size from 2 cm² to 1 cm² when compared to passive exploration from previous studies. Our results will allow designers to create better mid-air interactions using this new form of haptic feedback.

    Using non-speech sounds to provide navigation cues

    This article describes 3 experiments that investigate the possibility of using structured nonspeech audio messages called earcons to provide navigational cues in a menu hierarchy. A hierarchy of 27 nodes and 4 levels was created with an earcon for each node. Rules were defined for the creation of hierarchical earcons at each node. Participants had to identify their location in the hierarchy by listening to an earcon. Results of the first experiment showed that participants could identify their location with 81.5% accuracy, indicating that earcons were a powerful method of communicating hierarchy information. One proposed use for such navigation cues is in telephone-based interfaces (TBIs) where navigation is a problem. The first experiment did not address the particular problems of earcons in TBIs, such as “does the lower quality of sound over the telephone lower recall rates?”, “can users remember earcons over a period of time?” and “what effect does training type have on recall?” An experiment was conducted and results showed that sound quality did lower the recall of earcons. However, redesign of the earcons overcame this problem, with 73% recalled correctly. Participants could still recall earcons at this level after a week had passed. Training type also affected recall. With personal training participants recalled 73% of the earcons, but with purely textual training results were significantly lower. These results show that earcons can provide good navigation cues for TBIs. The final experiment used compound, rather than hierarchical, earcons to represent the hierarchy from the first experiment. Results showed that with sounds constructed in this way participants could recall 97% of the earcons. These experiments have developed our general understanding of earcons. A hierarchy three times larger than any previously created was tested, and this was also the first test of the recall of earcons over time.
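    The idea of rule-based hierarchical earcons, where each node's sound extends its parent's, can be sketched as follows. This is a hypothetical illustration, not the rule set from the article; the timbre assignments and return format are assumptions:

    ```python
    # Hypothetical sketch: a hierarchical earcon encodes the full path from the
    # root by appending one sound segment per level, so deeper nodes inherit
    # and extend their parent's motif.
    def hierarchical_earcon(path):
        """path: node labels from the root, e.g. ["root", "menu", "item3"].
        Returns one (timbre, label) segment per hierarchy level."""
        timbres = ["organ", "brass", "strings", "flute"]  # one timbre per level (assumed)
        return [(timbres[level % len(timbres)], label)
                for level, label in enumerate(path)]

    print(hierarchical_earcon(["root", "menu", "item3"]))
    ```

    Because a node's earcon contains every ancestor's segment, hearing it alone is enough to locate the node in the hierarchy, which is what the localization task above tests.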

    Mirror, Mirror, On The Wall: Collaborative Screen-Mirroring for Small Groups

    Screen mirroring has been available to consumers for some time; however, if every mobile device in the room supports screen mirroring to the main display (e.g. a shared TV), this necessitates a mechanism for managing its use. As such, this paper investigates allowing users in small intimacy groups (friends, family, etc.) to self-manage mirrored use of the display, through passing/taking/requesting the display from whomever is currently mirroring to it. We examine the collaborative benefits this scheme could provide for the home, compared to existing multi-device use and existing screen mirroring implementations. Results indicate shared screen mirroring improves perceived collaboration, decreases dominance, preserves independence and has a positive effect on a group's activity awareness.

    I Am The Passenger: How Visual Motion Cues Can Influence Sickness For In-Car VR

    This paper explores the use of VR Head Mounted Displays (HMDs) in-car and in-motion for the first time. Immersive HMDs are becoming everyday consumer items and, as they offer new possibilities for entertainment and productivity, people will want to use them during travel in, for example, autonomous cars. However, their use is confounded by motion sickness caused in part by the restricted visual perception of motion conflicting with physically perceived vehicle motion (accelerations/rotations detected by the vestibular system). Whilst VR HMDs restrict visual perception of motion, they could also render it virtually, potentially alleviating sensory conflict. To study this problem, we conducted the first on-road and in-motion study to systematically investigate the effects of various visual presentations of the real-world motion of a car on the sickness and immersion of VR HMD wearing passengers. We established new baselines for VR in-car motion sickness, and found that there is no one best presentation with respect to balancing sickness and immersion. Instead, user preferences suggest different solutions are required for differently susceptible users to provide usable VR in-car. This work provides formative insights for VR designers and an entry point for further research into enabling use of VR HMDs, and the rich experiences they offer, when travelling.

    Work-In-Progress Technical Report: Designing A Two-User, Two-View TV Display

    This work-in-progress paper previews how we can design interfaces and interactions for multi-view TVs, enabling users to transition between independent and shared activities, gain casual awareness of others’ activities, and collaborate more effectively. We first compare an Android-based multi-user TV against both multi-screen and multi-view TVs in a collaborative movie browsing task, to determine whether multi-view can improve collaboration, and what level of awareness of each other’s activity users choose. Based on our findings, we iterate on our multi-view design in a second study, giving users the ability to transition between casual and focused modes of engagement, and dynamically set their engagement with other users’ activities. This research demonstrates that the shared focal point of the TV now has the capability to facilitate both collaborative and completely independent activity.

    Bimodal Feedback for In-car Mid-air Gesture Interaction

    This demonstration showcases novel multimodal feedback designs for in-car mid-air gesture interaction. It explores the potential of multimodal feedback types for mid-air gestures in cars and how these can reduce eyes-off-the-road time and thus make driving safer. We will show four different bimodal feedback combinations to provide effective information about interaction with systems in a car. These feedback techniques are visual-auditory, auditory-ambient (peripheral vision), ambient-tactile, and tactile-auditory. Users can interact with the system after a short introduction, creating an exciting opportunity to deploy these displays in cars in the future.

    Facilitating Humanitarian Access to Pharmaceutical and Agricultural Innovation

    Calls for intellectual property licensing strategies in the pharmaceutical and agricultural sectors that promote humanitarian access to product innovations for the benefit of the disadvantaged. Includes profiles of successful and promising strategies.